Mix-up training approaches have proven effective at improving the generalization ability of Deep Neural Networks. Over the years, the research community has expanded mix-up methods in two directions, with extensive effort devoted to improving saliency-guided procedures but minimal attention to the random path, leaving the randomization domain under-explored. In this paper, inspired by the complementary strengths of the two directions, we introduce a novel method that lies at the junction of the two routes. By combining the best elements of randomness and saliency utilization, our method balances speed, simplicity, and accuracy. We name our method R-Mix, following the concept of "Random Mix-up". We demonstrate its effectiveness in generalization, weakly supervised object localization, calibration, and robustness to adversarial attacks. Finally, to address the question of whether there exists a better decision protocol, we train a Reinforcement Learning agent that decides the mix-up policy based on the classifier's performance, reducing dependency on human-designed objectives and hyperparameter tuning. Extensive experiments further show that the agent is capable of performing at a cutting-edge level, laying the foundation for a fully automatic mix-up. Our code is released at https://github.com/minhlong94/Random-Mixup.
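To make the starting point concrete, the sketch below shows only the purely random half of the recipe, i.e., vanilla input mix-up with a Beta-sampled coefficient; the saliency-guided component and the R-Mix decision policy are not reproduced here, and all names (`random_mixup`, `alpha`, `lam`) are illustrative rather than taken from the released code.

```python
import numpy as np

def random_mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Vanilla input mix-up: a convex combination of two samples and labels.

    Only the random component is sketched; R-Mix's saliency-guided component
    and its policy for choosing between them are not reproduced here.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient from Beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2   # mixed input
    y_mix = lam * y1 + (1.0 - lam) * y2   # mixed (soft) label
    return x_mix, y_mix

# Toy usage with one-hot labels for a 10-class problem.
x_a, x_b = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
y_a, y_b = np.eye(10)[3], np.eye(10)[7]
x_m, y_m = random_mixup(x_a, y_a, x_b, y_b)
```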
Convolution and self-attention are two powerful techniques for representation learning, and they are usually considered as two peer approaches that are distinct from each other. In this paper, we show that there exists a strong underlying relation between them, in the sense that the bulk of the computations of these two paradigms are in fact done with the same operation. Specifically, we first show that a traditional convolution with kernel size k x k can be decomposed into k^2 individual 1x1 convolutions, followed by shift and summation operations. Then, we interpret the projections of queries, keys, and values in the self-attention module as multiple 1x1 convolutions, followed by the computation of attention weights and aggregation of the values. Therefore, the first stage of both modules comprises a similar operation. More importantly, the first stage contributes the dominant computational complexity (quadratic in channel size) compared to the second stage. This observation naturally leads to an elegant integration of these two seemingly distinct paradigms, i.e., a mixed model that enjoys the benefits of both self-attention and convolution (ACmix), while having minimal computational overhead compared to pure convolution or self-attention counterparts. Extensive experiments show that our model consistently achieves improved results over competitive baselines on image recognition and downstream tasks. Code and pre-trained models will be released at https://github.com/panxuran/acmix and https://gitee.com/mindspore/models.
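The decomposition claim behind the first stage can be checked numerically: a k x k convolution equals the sum of k^2 individual 1x1 convolutions applied to shifted copies of the input. The PyTorch snippet below is a minimal verification of that identity, not the ACmix module itself; the tensor shapes are arbitrary.

```python
import torch
import torch.nn.functional as F

# Numerical check (not the ACmix module): a k x k convolution equals the sum
# of k^2 individual 1x1 convolutions applied to shifted copies of the input.
torch.manual_seed(0)
C_in, C_out, H, W, k = 4, 8, 16, 16, 3
x = torch.randn(1, C_in, H, W)
weight = torch.randn(C_out, C_in, k, k)

# Reference: standard k x k convolution with zero padding.
ref = F.conv2d(x, weight, padding=k // 2)

# Decomposition: each spatial tap (i, j) acts as a 1x1 convolution on a
# shifted view of the padded input; summing the k^2 results recovers ref.
out = torch.zeros_like(ref)
x_pad = F.pad(x, (k // 2,) * 4)
for i in range(k):
    for j in range(k):
        w_1x1 = weight[:, :, i, j].unsqueeze(-1).unsqueeze(-1)  # (C_out, C_in, 1, 1)
        shifted = x_pad[:, :, i:i + H, j:j + W]                 # shifted input copy
        out += F.conv2d(shifted, w_1x1)

print(torch.allclose(ref, out, atol=1e-5))  # expected: True
```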
Machine learning has demonstrated remarkable prediction accuracy on i.i.d. data, but the accuracy often drops when the model is tested on data from another distribution. In this paper, we aim to offer another perspective on this accuracy drop, under the assumption that its cause is the model's reliance on features that are not well aligned with how data annotators label similarly across the two datasets. We refer to these features as misaligned features. We extend the conventional generalization error to a new one for this setting that captures how the misaligned features are associated with the label. Our analysis offers a set of techniques for this problem, and these techniques are naturally linked to many methods in the robust machine learning literature. We also compare the empirical strength of these methods and show their performance when they are combined.
Bayesian networks (BNs) are a widely used graphical model in machine learning for representing knowledge with uncertainty. The mainstream BN structure learning methods require performing a large number of conditional independence (CI) tests. The learning process is very time-consuming, especially for high-dimensional problems, which hinders the adoption of BNs in more applications. Existing works attempt to accelerate the learning process with parallelism, but face issues including load imbalance, costly atomic operations, and dominant parallel overhead. In this paper, we propose a fast solution named Fast-BNS on multi-core CPUs to enhance the efficiency of BN structure learning. Fast-BNS is powered by a series of efficiency optimizations, including (i) designing a dynamic work pool to monitor the processing of edges and to better schedule the workloads among threads, (ii) grouping the CI tests of edges with the same endpoints to reduce the number of unnecessary CI tests, (iii) using cache-friendly data storage to improve memory efficiency, and (iv) generating the conditioning sets on the fly to avoid extra memory consumption. A comprehensive experimental study shows that the sequential version of Fast-BNS is up to 50 times faster than its counterpart, and the parallel version achieves 4.8 to 24.5 times speedup over the state-of-the-art multi-threaded solution. Moreover, Fast-BNS scales well with both the network size and the sample size. Fast-BNS source code is freely available at https://github.com/jjiantong/FastBN.
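As a rough illustration of optimization (ii), the sketch below groups pending CI tests by their edge endpoints so that the per-edge data columns are sliced only once; the grouping function, the toy independence test, and the data layout are all assumptions made for illustration and are not taken from the Fast-BNS code.

```python
import numpy as np
from collections import defaultdict

def toy_ci_test(x_col, y_col, z_cols):
    """Stand-in test: declare independence when |corr(X, Y)| is small.
    A real implementation would use a conditional test such as chi-square or G^2."""
    r = np.corrcoef(x_col, y_col)[0, 1]
    return abs(r) < 0.05

def run_grouped_ci_tests(data, pending):
    """pending: list of ((x, y), frozenset_of_z) CI tests awaiting evaluation.
    Tests sharing the same edge (x, y) are grouped so that the X/Y columns are
    sliced from the data matrix only once per edge."""
    by_edge = defaultdict(list)
    for (x, y), z in pending:
        by_edge[(x, y)].append(z)
    results = {}
    for (x, y), z_sets in by_edge.items():
        x_col, y_col = data[:, x], data[:, y]          # fetched once per edge
        for z in z_sets:
            results[(x, y, z)] = toy_ci_test(x_col, y_col, data[:, sorted(z)])
    return results

data = np.random.randn(1000, 4)                        # 4 variables, 1000 samples
pending = [((0, 1), frozenset()), ((0, 1), frozenset({2})), ((1, 3), frozenset({0}))]
print(run_grouped_ci_tests(data, pending))
```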
Bayesian networks (BNs) are attractive because they are graphical and interpretable machine learning models. However, exact inference on BNs is time-consuming, especially for complex problems. To improve efficiency, we propose a fast BN exact inference solution named Fast-BNI on multi-core CPUs. Fast-BNI enhances the efficiency of exact inference through hybrid parallelism that tightly integrates coarse- and fine-grained parallelism. We also propose techniques to further simplify the bottleneck operations of BN exact inference. Fast-BNI source code is freely available at https://github.com/jjiantong/FastBN.
Pre-trained Language Models (PLMs), which are trained on large text corpora through self-supervised learning, have yielded promising performance on various tasks in Natural Language Processing (NLP). However, although PLMs with huge numbers of parameters can effectively capture rich knowledge from massive training text and benefit downstream tasks at the fine-tuning stage, they still have limitations such as poor reasoning ability due to the lack of external knowledge. Incorporating knowledge into PLMs has been explored to tackle these issues. In this paper, we present a comprehensive review of Knowledge-Enhanced Pre-trained Language Models (KE-PLMs) to provide a clear insight into this thriving field. We introduce appropriate taxonomies for Natural Language Understanding (NLU) and Natural Language Generation (NLG) respectively, to highlight the focus of these two kinds of tasks. For NLU, we take several types of knowledge into account and divide them into four categories: linguistic knowledge, text knowledge, knowledge graph (KG), and rule knowledge. The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods. Finally, we point out some promising future directions for KE-PLMs.
We introduce BusyBoard, a toy-inspired robot learning environment that leverages a diverse set of articulated objects and inter-object functional relations to provide rich visual feedback for robot interaction. Based on this environment, we introduce a learning framework, BusyBot, which allows an agent to jointly acquire three fundamental capabilities (interaction, reasoning, and planning) in an integrated and self-supervised manner. With the rich sensory feedback provided by BusyBoard, BusyBot first learns a policy to interact with the environment efficiently; then, with data collected using this policy, BusyBot reasons about inter-object functional relations through a causal discovery network; finally, by combining the learned interaction policy and relational reasoning skill, the agent can perform goal-conditioned manipulation tasks. We evaluate BusyBot in both simulated and real-world environments, and validate its generalizability to unseen objects and relations. A video is available at https://youtu.be/ej98xbjz9ek.
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language. While this progress is promising, it is currently unclear whether these models indeed perform logical reasoning by understanding the underlying logical semantics in the language. To this end, we propose RobustLR, a suite of evaluation datasets that measure the robustness of these models to minimal logical edits in rulebases and to some standard logical equivalence conditions. In our experiments with RoBERTa and T5, we find that the models trained in prior works do not perform consistently across the different perturbations in RobustLR, showing that they are not robust to the proposed logical perturbations. Further, the models find it especially hard to learn the logical negation and disjunction operators. Overall, using our evaluation sets, we demonstrate some shortcomings of deductive-reasoning-based language models, which can eventually help in designing better models for logical reasoning over natural language. All the datasets and code base have been made publicly available.
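As a concrete example of a "standard logical equivalence condition" such a suite can probe, an implication and its contrapositive are truth-functionally equivalent, so a robust model should judge both the same way; the tiny check below merely enumerates the truth table and is not RobustLR's data-generation code.

```python
from itertools import product

# "If A then B" and its contrapositive "if not B then not A" are equivalent;
# probes in the spirit of RobustLR check whether a model's predictions stay
# consistent under such rewrites (this snippet only verifies the logic itself).
def implies(a, b):
    return (not a) or b

rule = lambda a, b: implies(a, b)
contrapositive = lambda a, b: implies(not b, not a)

print(all(rule(a, b) == contrapositive(a, b)
          for a, b in product([False, True], repeat=2)))  # expected: True
```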
Federated learning has been a hot research topic, enabling the collaborative training of machine learning models among different organizations under privacy restrictions. As researchers try to support more machine learning models with different privacy-preserving approaches, there is a need to develop systems and infrastructures that ease the development of various federated learning algorithms. Just as deep learning systems such as PyTorch and TensorFlow boost the development of deep learning, federated learning systems (FLSs) play an equivalent role and face challenges in various aspects such as effectiveness, efficiency, and privacy. In this survey, we conduct a comprehensive review of federated learning systems. To achieve a smooth flow and guide future research, we introduce the definition of federated learning systems and analyze their system components. Moreover, we provide a thorough categorization of federated learning systems according to six different aspects, including data distribution, machine learning model, privacy mechanism, communication architecture, scale of federation, and motivation of federation. The categorization can help in the design of federated learning systems, as shown in our case studies. By systematically summarizing existing federated learning systems, we present the design factors, case studies, and future research opportunities.
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity between the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
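A rough, assumption-laden sketch of the idea behind such a block, pairing a depthwise convolution for short-distance dependency with a lightweight self-attention for long-distance interactions inside one expand/project residual, is given below; the layer order, expansion ratio, and normalization of the published iRMB differ, so this is an illustration rather than the EMO architecture.

```python
import torch
import torch.nn as nn

class IRMBSketch(nn.Module):
    """Rough sketch in the spirit of an inverted-residual mobile block.

    The real iRMB/EMO block differs in detail (attention placement, expansion
    ratio, normalization); this only illustrates pairing a local depthwise
    convolution with a lightweight self-attention inside one expand/project
    residual.
    """
    def __init__(self, dim, expansion=4, heads=4):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden, 1)              # pointwise expand
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1,
                                groups=hidden)               # local, short-range
        self.attn = nn.MultiheadAttention(hidden, heads,
                                          batch_first=True)  # global, long-range
        self.project = nn.Conv2d(hidden, dim, 1)             # pointwise project
        self.act = nn.GELU()

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.act(self.expand(x))
        y = self.act(self.dwconv(y))
        tokens = y.flatten(2).transpose(1, 2)                # (B, HW, hidden)
        tokens, _ = self.attn(tokens, tokens, tokens)
        y = y + tokens.transpose(1, 2).reshape(b, -1, h, w)  # fuse local + global
        return x + self.project(y)                           # residual connection

x = torch.randn(1, 32, 14, 14)
print(IRMBSketch(32)(x).shape)  # torch.Size([1, 32, 14, 14])
```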